We propose a manager-worker framework based on deep reinforcement learning to tackle a challenging yet non-trivial variant of the Travelling Salesman Problem (TSP), i.e., the multi-vehicle TSP with time windows and rejections (MTSPTWR), where customers who cannot be served before the deadline are subject to rejection. Particularly, in the proposed framework, a manager agent decomposes MTSPTWR into sub-routing tasks by assigning customers to each vehicle via a policy network based on the Graph Isomorphism Network (GIN). A worker agent solves the sub-routing tasks by minimizing the cost in terms of travel length and rejection rate for each vehicle, the maximum of which is then fed back to the manager agent to learn better assignments. Experimental results demonstrate that the proposed framework outperforms strong baselines in terms of both higher solution quality and shorter computation time. More importantly, the trained agents also achieve competitive performance in solving unseen larger-scale instances.
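As a toy illustration of the decomposition described above (this is not the authors' implementation: the GIN policy network is replaced by a fixed customer assignment, and the Manhattan metric, deadline model, and cost weight `beta` are all illustrative assumptions), the sketch below computes each worker's cost from travel length and rejection rate, and the max over vehicles that would be fed back to the manager:

```python
def tour_length(depot, stops):
    """Total travel distance depot -> stops (in order) -> depot (Manhattan metric)."""
    path = [depot] + stops + [depot]
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(path, path[1:]))

def worker_cost(depot, customers, deadline_budget, beta=10.0):
    """Serve customers greedily; a customer whose cumulative travel time would
    exceed the deadline budget is rejected.  The cost mixes travel length and
    rejection rate, as in the abstract (the mixing weight is hypothetical)."""
    served, rejected, t = [], 0, 0.0
    pos = depot
    for c in customers:
        d = abs(pos[0] - c[0]) + abs(pos[1] - c[1])
        if t + d <= deadline_budget:
            t += d
            served.append(c)
            pos = c
        else:
            rejected += 1
    rej_rate = rejected / len(customers) if customers else 0.0
    return tour_length(depot, served) + beta * rej_rate

def manager_objective(depot, assignment, deadline_budget):
    """The manager minimises the maximum worker cost over all vehicles."""
    return max(worker_cost(depot, group, deadline_budget) for group in assignment)
```

With a generous deadline every customer is served and the objective is the longest tour; with no budget every customer is rejected and the rejection penalty dominates.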
Image restoration schemes based on pre-trained deep models have received great attention due to their unique flexibility in handling various inverse problems. In particular, the Plug-and-Play (PnP) framework is a popular and powerful tool that can integrate an off-the-shelf deep denoiser with a known observation model for different image restoration tasks. However, obtaining an observation model that exactly matches the actual one can be challenging in practice. Thus, PnP schemes with conventional deep denoisers may fail to produce satisfying results in some real-world image restoration tasks. We argue that the robustness of the PnP framework is largely limited by the use of off-the-shelf deep denoisers trained via deterministic optimization. To this end, we propose a novel deep reinforcement learning (DRL) based PnP framework, dubbed RePNP, which leverages a lightweight DRL-based denoiser for robust image restoration. Experimental results demonstrate that the proposed RePNP is robust to deviations between the observation model used in the PnP scheme and the actual one. Thus, RePNP can generate more reliable restoration results for image deblurring and super-resolution tasks. Compared with several state-of-the-art deep image restoration baselines, RePNP achieves better results under model deviation with fewer model parameters.
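The PnP idea described above can be sketched as a proximal-gradient loop in which a denoiser stands in for the prior's proximal operator. This is a generic PnP-ISTA sketch, not RePNP itself: the DRL-trained denoiser is replaced by a trivial moving-average filter, the forward model is the identity, and the step size and iteration count are arbitrary assumptions:

```python
import numpy as np

def box_denoiser(x, k=3):
    """Stand-in for an off-the-shelf deep denoiser: a simple moving average."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.size):
        out[i] = xp[i:i + k].mean()
    return out

def pnp_ista(y, forward, adjoint, denoiser, step=0.5, iters=20):
    """Plug-and-Play proximal gradient: a gradient step on the data-fidelity
    term, then the plug-in denoiser in place of the prior's proximal map."""
    x = y.copy()
    for _ in range(iters):
        x = x - step * adjoint(forward(x) - y)   # data-fidelity gradient step
        x = denoiser(x)                          # plug-in denoiser as prior
    return x
```

On a noisy piecewise-constant signal with an identity observation model, the loop reduces the reconstruction error relative to the raw observation; swapping in a mismatched `forward`/`adjoint` pair is exactly the robustness issue the abstract targets.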
A single image-level annotation often correctly describes only a small subset of the image content, particularly when complex real-world scenes are depicted. While this may be acceptable in many classification scenarios, it poses a significant challenge for applications whose conditions differ considerably between training time and test time. In this paper, we take a closer look at the implications in the context of few-shot learning. Splitting the input samples into patches and encoding them with the help of Vision Transformers allows us to establish semantic correspondences between local regions across images, independently of their respective class. The most informative patch embeddings for the task at hand are then determined as a function of the support set via online optimization at inference time, additionally providing visual interpretability of "what matters most" in the image. We build on recent advances in unsupervised training of networks via masked image modeling to overcome the lack of fine-grained labels and to learn the more general statistical structure of the data, while avoiding the negative impact of image-level annotations, a.k.a. supervision collapse. Experimental results show the competitiveness of our approach, which achieves new state-of-the-art results on four popular few-shot classification benchmarks in both the 5-shot and 1-shot settings.
Learning and generalizing to novel concepts from few samples (few-shot learning) remains an essential challenge for real-world applications. A principled way to achieve few-shot learning is to realize a model that can rapidly adapt to the context of a given task. Dynamic networks have been shown capable of learning content-adaptive parameters efficiently, making them suitable for few-shot learning. In this paper, we propose to learn the dynamic kernels of a convolution network as a function of the task at hand, enabling faster generalization. To this end, we obtain our dynamic kernels based on the entire task and each sample, and develop a mechanism for further conditioning on each individual channel and position. This results in dynamic kernels that simultaneously attend to the available micro-level information. We empirically show that our model improves performance on few-shot classification and detection tasks, achieving tangible improvements over several baseline models. This includes state-of-the-art results on four few-shot classification benchmarks: mini-ImageNet, tiered-ImageNet, CUB and FC100, as well as competitive results on the few-shot detection dataset COCO-PASCAL-VOC.
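The idea of task-conditioned dynamic kernels can be caricatured as follows; the projection matrices here are random, fixed stand-ins for what the real model would learn, and all names and shapes are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

def dynamic_kernel(task_feats, sample_feat, k=3):
    """Toy dynamic-kernel generator: the kernel is a linear function of the
    task context (mean over the support-set features) and of the sample
    itself, so the same input is filtered differently per task and sample."""
    rng = np.random.default_rng(0)                      # learned in reality
    w_task = rng.standard_normal((k, task_feats.shape[1]))
    w_samp = rng.standard_normal((k, sample_feat.shape[0]))
    ctx = task_feats.mean(axis=0)                       # task-level context
    return w_task @ ctx + w_samp @ sample_feat

def conv1d_valid(x, kern):
    """Apply the generated kernel with a 'valid' 1-D convolution."""
    k = len(kern)
    return np.array([x[i:i + k] @ kern for i in range(len(x) - k + 1)])
```

Changing the support set changes the context vector and hence the kernel, which is the mechanism the abstract relies on for fast adaptation.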
Learning and generalizing from limited examples, i.e., few-shot learning, is of core importance to many real-world vision applications. A principal way of achieving few-shot learning is to realize an embedding in which samples from different classes are distinctive. Recent studies suggest that embedding via hyperbolic geometry enjoys low distortion for hierarchical and structured data, making it suitable for few-shot learning. In this paper, we propose to learn a context-aware hyperbolic metric to characterize the distance between a point and a set, associated with a learned set-to-set distance. To this end, we formulate the metric as a weighted sum on the tangent bundle of the hyperbolic space and develop a mechanism to obtain the weights adaptively, based on the constellation of the points. This not only makes the metric local but also dependent on the task at hand, meaning that the metric adapts according to the samples it compares. We empirically show that such a metric yields robustness in the presence of outliers and achieves tangible improvements over baseline models. This includes state-of-the-art results on five popular few-shot classification benchmarks, namely mini-ImageNet, tiered-ImageNet, Caltech-UCSD Birds-200-2011 (CUB), CIFAR-FS, and FC100.
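The point-to-set distance described above can be sketched as follows. The Poincaré-ball geodesic distance is the standard formula; the softmin weighting is a hand-coded simplification of the learned, constellation-dependent weights, so the temperature and weighting scheme are assumptions:

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    duv = np.dot(u - v, u - v)
    return np.arccosh(1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv) + eps))

def point_to_set(q, S, temp=1.0):
    """Adaptive point-to-set distance: a softmin-weighted sum of the
    point-to-member distances, so closer members of the set receive larger
    weights -- a simplification of the learned weighting in the abstract."""
    d = np.array([poincare_dist(q, s) for s in S])
    w = np.exp(-d / temp)
    w = w / w.sum()
    return float(w @ d)
```

Because distant members are down-weighted, a single outlier in the set moves this distance far less than it would move a plain average, which mirrors the robustness claim above.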
Tracking requires building a discriminative model of the target for the inference stage. An effective way to achieve this is online learning, which can comfortably outperform models trained offline. Recent research shows that visual tracking benefits significantly from unifying visual tracking and segmentation, owing to its pixel-level discrimination. However, performing online learning on such a unified model poses great challenges: a segmentation model cannot easily learn from the prior information given in a visual tracking scenario. In this paper, we propose TrackMLP, a novel meta-learning method optimized to learn from only partial information, to resolve the imposed challenge. Our model is able to extensively exploit the limited prior information and hence possesses much stronger target-background discriminability than other online learning methods. Empirically, we show that our model achieves state-of-the-art performance and tangible improvements over competing models. Our model achieves average overlaps of 66.0%, 67.1%, and 68.5% on the VOT2019, VOT2018, and VOT2016 datasets, which are 6.4%, 7.3%, and 6.4% higher than our baselines, respectively. Code will be made publicly available.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; and 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
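A minimal sketch of why token-relation distillation (finding 1 above) sidesteps the width mismatch between teacher and student: relation matrices are N x N regardless of embedding width, so a narrow student can be matched against a wide teacher directly. The similarity and normalization choices here are illustrative, not TinyMIM's exact losses:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def token_relations(tokens):
    """Row-normalised token-to-token similarity matrix (N x N).  Its shape
    does not depend on the embedding width, so teacher and student relations
    are comparable even when their hidden dimensions differ."""
    scale = np.sqrt(tokens.shape[1])
    return softmax(tokens @ tokens.T / scale)

def relation_distill_loss(student_tokens, teacher_tokens):
    """Mean squared error between student and teacher relation matrices."""
    rs = token_relations(student_tokens)
    rt = token_relations(teacher_tokens)
    return float(np.mean((rs - rt) ** 2))
```

A CLS-token or feature-based loss would instead need an extra learned projection whenever the two widths differ.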
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
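A sketch of the NAIVEATTACK idea described above (a fixed trigger stamped onto raw data before distillation begins, with the label flipped to the attacker's target; DOORPING would instead re-optimize the trigger throughout distillation). The trigger shape, placement, and poison rate are illustrative assumptions:

```python
import numpy as np

def stamp_trigger(images, trigger):
    """Stamp a fixed trigger patch onto the bottom-right corner of a batch
    of (N, H, W) images."""
    out = images.copy()
    th, tw = trigger.shape
    out[:, -th:, -tw:] = trigger
    return out

def poison(images, labels, trigger, target_label, rate=0.1, seed=0):
    """NAIVEATTACK-style poisoning of the raw data fed to distillation:
    a fraction of samples gets the trigger stamped and the label flipped."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=max(1, int(rate * n)), replace=False)
    images = images.copy()
    labels = labels.copy()
    images[idx] = stamp_trigger(images[idx], trigger)
    labels[idx] = target_label
    return images, labels, idx
```

The distillation procedure then runs unchanged on the poisoned pool, which is what distinguishes this threat model from the usual training-stage backdoor attacks.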
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression task, in line with the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the experimental results indicate that PMT-IQA outperforms the comparison approaches, and that both the MS and PMT modules improve the model's performance.
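The easy-to-hard progressive idea and the multi-scale module can be caricatured as follows; the linear weight schedule, the choice of "easy" auxiliary task, and the mean-pool aggregation are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def progressive_weights(step, total_steps):
    """Easy-to-hard schedule: early training emphasises the easier auxiliary
    task (e.g. coarse quality classification), then the weight shifts
    linearly toward the harder target task (quality-score regression)."""
    alpha = min(1.0, step / total_steps)
    return 1.0 - alpha, alpha   # (easy-task weight, hard-task weight)

def multiscale_pool(feats):
    """Crude multi-scale aggregation: mean-pool each scale's (H, W, C)
    feature map over space and concatenate the per-scale channel vectors,
    mimicking the role of the MS module."""
    return np.concatenate([f.mean(axis=(0, 1)) for f in feats])
```

The combined loss at a given step would then be `w_easy * easy_loss + w_hard * hard_loss` with the weights above, so the regression term dominates only once training has progressed.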